Bayesian Persuasion


Computational Aspects of Bayesian Persuasion under Approximate Best Response

Neural Information Processing Systems

We study Bayesian persuasion under approximate best response, where the receiver may choose any action that is not too suboptimal given their posterior belief upon receiving the signal. We focus on the computational aspects of the problem, aiming to design algorithms that efficiently compute (almost) optimal strategies for the sender. Despite the absence of the revelation principle, which has been one of the most powerful tools in Bayesian persuasion, we design polynomial-time exact algorithms for the problem when either the state space or the action space is small, as well as a quasi-polynomial-time approximation scheme (QPTAS) for the general problem. On the negative side, we show there is no polynomial-time exact algorithm for the general problem unless $\mathsf{P} = \mathsf{NP}$. Our results build on several new algorithmic ideas, which might be useful in other principal-agent problems where robustness is desired.
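To make the approximate-best-response model concrete, the following minimal sketch (an illustration under assumed payoffs, not the paper's algorithm) evaluates a fixed signaling scheme when the receiver may pick any action whose posterior expected utility is within a tolerance EPS of the optimum, with a robust sender assuming the worst such choice. The prior, scheme, and utility matrices are all invented for the example.

import numpy as np

EPS = 0.05  # receiver's suboptimality tolerance (assumed)

prior = np.array([0.3, 0.7])            # P(state)
# signal[s, m]: probability of sending message m in state s
signal = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
# utilities indexed by [state, action]
u_receiver = np.array([[1.0, 0.0],
                       [0.2, 0.9]])
u_sender = np.array([[0.0, 1.0],
                     [0.5, 1.0]])

def worst_case_sender_utility():
    """Sender's expected utility under adversarial eps-best response."""
    total = 0.0
    for m in range(signal.shape[1]):
        p_m = prior @ signal[:, m]      # marginal probability of message m
        if p_m == 0:
            continue
        posterior = prior * signal[:, m] / p_m
        v = posterior @ u_receiver      # receiver's expected value per action
        # the receiver may choose any action within EPS of their optimum
        candidates = np.flatnonzero(v >= v.max() - EPS)
        # a robust sender assumes the least favorable approximate choice
        total += p_m * min(posterior @ u_sender[:, a] for a in candidates)
    return total

print(f"robust sender utility: {worst_case_sender_utility():.3f}")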


Bayesian Persuasion for Algorithmic Recourse

Neural Information Processing Systems

When subjected to automated decision-making, decision subjects may strategically modify their observable features in ways they believe will maximize their chances of receiving a favorable decision. In many practical situations, the underlying assessment rule is deliberately kept secret to avoid gaming and maintain competitive advantage. The resulting opacity forces the decision subjects to rely on incomplete information when making strategic feature modifications. We capture such settings as a game of Bayesian persuasion, in which the decision maker offers a form of recourse to the decision subject by providing them with an action recommendation (or signal) to incentivize them to modify their features in desirable ways. We show that when using persuasion, the decision maker and decision subject are never worse off in expectation, while the decision maker can be significantly better off. While the decision maker's problem of finding the optimal Bayesian incentive compatible (BIC) signaling policy takes the form of optimization over infinitely many variables, we show that this optimization can be cast as a linear program over finitely many regions of the space of possible assessment rules. Although this reformulation simplifies the problem dramatically, solving the linear program requires reasoning about exponentially many variables, even in relatively simple cases. Motivated by this observation, we provide a polynomial-time approximation scheme that recovers a near-optimal signaling policy. Finally, our numerical simulations on semi-synthetic data empirically demonstrate the benefits of using persuasion in the algorithmic recourse setting.
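As a hedged illustration of this linear-programming viewpoint, the sketch below solves the classic single-receiver persuasion LP on a toy prosecutor-judge instance (a standard textbook example, not the paper's recourse model): joint variables y[s, a] = prior[s] * P(recommend a | state s) are optimized subject to obedience (BIC) constraints. All payoffs are assumptions.

import numpy as np
from scipy.optimize import linprog

# States: 0 = innocent, 1 = guilty; actions: 0 = acquit, 1 = convict.
prior = np.array([0.7, 0.3])
u_R = np.array([[1.0, 0.0],   # receiver (judge) wants the correct verdict
                [0.0, 1.0]])
u_S = np.array([[0.0, 1.0],   # sender (prosecutor) always wants conviction
                [0.0, 1.0]])

S, A = u_R.shape

def idx(s, a):                # flatten y[s, a] into a variable index
    return s * A + a

# Maximize expected sender utility (linprog minimizes, so negate).
c = -np.array([u_S[s, a] for s in range(S) for a in range(A)])

# Obedience (BIC): for each recommendation a and deviation b,
#   sum_s y[s, a] * (u_R[s, a] - u_R[s, b]) >= 0.
A_ub, b_ub = [], []
for a in range(A):
    for b in range(A):
        if b == a:
            continue
        row = np.zeros(S * A)
        for s in range(S):
            row[idx(s, a)] = -(u_R[s, a] - u_R[s, b])
        A_ub.append(row)
        b_ub.append(0.0)

# Consistency with the prior: sum_a y[s, a] = prior[s].
A_eq = []
for s in range(S):
    row = np.zeros(S * A)
    for a in range(A):
        row[idx(s, a)] = 1.0
    A_eq.append(row)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=prior,
              bounds=[(0, None)] * (S * A))
print("optimal sender utility:", -res.fun)        # 0.6 on this instance
print("joint policy y[s, a]:\n", res.x.reshape(S, A))

The optimum matches the well-known result that the prosecutor can induce conviction with probability 0.6 when the prior probability of guilt is 0.3.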


Persuading Stable Matching

Jonathan Shaki, Jiarui Gan, Sarit Kraus

arXiv.org Artificial Intelligence

In bipartite matching problems, agents on two sides of a graph want to be paired according to their preferences. The stability of a matching depends on these preferences, which in uncertain environments also reflect agents' beliefs about the underlying state of the world. We investigate how a principal, who observes the true state of the world, can strategically shape these beliefs through Bayesian persuasion to induce a stable matching that maximizes a desired utility. Due to the general intractability of the underlying matching optimization problem as well as the multi-receiver persuasion problem, our main considerations are two important special cases: (1) when agents can be categorized into a small number of types based on their value functions, and (2) when the number of possible world states is small. For each case, we study both public and private signaling settings. Our results draw a complete complexity landscape: we show that private persuasion remains intractable even when the number of world states is small, while all other settings admit polynomial-time algorithms. We present efficient algorithms for each tractable case and prove NP-hardness for the intractable ones. These results illuminate the algorithmic frontier of stable matching under information design and clarify when optimal persuasion is computationally feasible.
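For readers who want the stability notion itself in code, here is a minimal deferred-acceptance (Gale-Shapley) sketch; it computes a stable matching under fixed, fully known preferences and does not model the paper's signaling layer or uncertain world states. The tiny preference lists are invented for illustration.

def gale_shapley(proposer_prefs, reviewer_prefs):
    """Return a stable matching as {proposer: reviewer}.

    proposer_prefs[p] is p's preference list, best first; likewise
    reviewer_prefs[r] for each reviewer r.
    """
    # rank[r][p]: position of proposer p in reviewer r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)            # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                           # reviewer -> current proposer

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # p's best untried reviewer
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:  # r prefers p to current match
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)                      # r rejects p; p tries again
    return {p: r for r, p in engaged.items()}

prefs_left = {"a": ["x", "y"], "b": ["x", "y"]}
prefs_right = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(prefs_left, prefs_right))    # {'a': 'y', 'b': 'x'}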


Towards Strategic Persuasion with Language Models

Zirui Cheng, Jiaxuan You

arXiv.org Artificial Intelligence

Large language models (LLMs) have demonstrated strong persuasive capabilities comparable to those of humans, offering promising benefits while raising societal concerns about their deployment. However, systematically evaluating the persuasive capabilities of LLMs is inherently challenging, as the effectiveness of persuasion among humans varies significantly across different domains. In this paper, we take a theory-driven approach to provide a scalable and principled framework for measuring the persuasive capabilities of LLMs. Grounded in the Bayesian Persuasion (BP) framework, we repurpose existing human-human persuasion datasets to construct environments for evaluating and training LLMs in strategic persuasion. Our results reveal that frontier models can consistently achieve high persuasion gains and exhibit sophisticated persuasion strategies that align with theoretical predictions. Building on this, we use reinforcement learning to train LLMs for strategic persuasion in our environments and find that even small LLMs can obtain significantly higher persuasion gains through such training.
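As a hedged sketch of how a persuasion-gain metric can be operationalized in the BP framework (one plausible reading, not necessarily the paper's exact benchmark), the snippet below scores a signaling policy by the sender's expected utility against a Bayesian best-responding receiver, minus the utility of a babbling (no-information) policy. The payoffs, and the evaluated policy standing in for behavior elicited from a model, are assumptions.

import numpy as np

prior = np.array([0.4, 0.6])
u_R = np.array([[1.0, 0.0],   # receiver wants to match the state
                [0.0, 1.0]])
u_S = np.array([[1.0, 0.0],   # sender always prefers action 0
                [1.0, 0.0]])

def sender_utility(policy):
    """Expected sender utility when a Bayesian receiver best-responds.

    policy[s, m] is the probability of sending message m in state s.
    """
    total = 0.0
    for m in range(policy.shape[1]):
        p_m = prior @ policy[:, m]
        if p_m == 0:
            continue
        posterior = prior * policy[:, m] / p_m
        a = int(np.argmax(posterior @ u_R))   # receiver's best response
        total += p_m * (posterior @ u_S[:, a])
    return total

no_info = np.ones((2, 1))                     # babbling: a single message
evaluated = np.array([[1.0, 0.0],             # illustrative evaluated policy
                      [0.5, 0.5]])
gain = sender_utility(evaluated) - sender_utility(no_info)
print(f"persuasion gain: {gain:.3f}")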